Towards Faster Training Algorithms Exploiting Bandit Sampling From Convex to Strongly Convex Conditions

Abstract

The training process in deep learning and pattern recognition normally relies on convex and strongly convex optimization algorithms such as AdaBelief and SAdam. These algorithms have to process many "uninformative" samples that could safely be ignored, thus incurring extra computation. To address this open problem, we propose bandit sampling methods that make these algorithms focus on "informative" samples during the training process. Our contribution is twofold: first, we propose an AdaBelief algorithm with bandit sampling, termed AdaBeliefBS, and prove that it converges faster than its original version; second, we show that bandit sampling also works well for strongly convex algorithms, and propose a generalized SAdam, called SAdamBS, that converges faster than SAdam. Finally, we conduct a series of experiments on various benchmark datasets to verify the fast convergence rate of our proposed algorithms.
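The abstract describes the idea only at a high level; the paper itself specifies AdaBeliefBS and SAdamBS. As a minimal, purely illustrative sketch of bandit sampling over training examples, the snippet below uses an EXP3-style sampler with a clipped-loss reward inside a plain SGD loop. All names, the reward design, and the parameter choices are assumptions for illustration, not the paper's algorithm.

import numpy as np

rng = np.random.default_rng(0)

def make_sampler(n, eta=0.01, gamma=0.1):
    """EXP3-style sampler over n training examples (illustrative sketch).

    Examples that repeatedly yield large losses ("informative" ones) get
    sampled more often; eta is the bandit step size and gamma mixes in
    uniform exploration. The reward design is an assumption, not the
    paper's derivation.
    """
    weights = np.ones(n)

    def sample():
        probs = (1 - gamma) * weights / weights.sum() + gamma / n
        i = rng.choice(n, p=probs)
        return i, probs[i]

    def update(i, p_i, reward):
        # Importance weighting (reward / p_i) keeps the estimate unbiased.
        weights[i] *= np.exp(eta * reward / p_i)
        weights /= weights.sum()  # rescale; probabilities are unchanged

    return sample, update

# Toy usage: least-squares regression trained with plain SGD, drawing one
# example per step from the bandit instead of uniformly at random.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w = np.zeros(d)

sample, update = make_sampler(n)
for step in range(2000):
    i, p_i = sample()
    err = X[i] @ w - y[i]
    grad = err * X[i]
    # The 1/(n * p_i) factor corrects for non-uniform sampling, so the
    # expected step matches the uniform-sampling SGD step.
    w -= 0.01 * grad / (n * p_i)
    update(i, p_i, reward=min(err ** 2, 1.0))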

Similar Articles

Bandit Convex Optimization: Towards Tight Bounds

Bandit Convex Optimization (BCO) is a fundamental framework for decision making under uncertainty, which generalizes many problems from the realm of online and statistical learning. While the special case of linear cost functions is well understood, a gap on the attainable regret for BCO with nonlinear losses remains an important open question. In this paper we take a step towards understanding...
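For context, the regret this literature bounds is the standard bandit regret, in which the learner observes only the scalar loss value of its own query point each round. This is the textbook definition, not something specific to the abstract above:

% Expected regret after T rounds of bandit convex optimization over a
% bounded convex set K; the learner sees only the scalar f_t(x_t),
% never a gradient.
\[
  \mathrm{Regret}_T
  = \mathbb{E}\!\left[\sum_{t=1}^{T} f_t(x_t)\right]
  - \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x).
\]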

Optimistic Bandit Convex Optimization

We introduce the general and powerful scheme of predicting information re-use in optimization algorithms. This allows us to devise a computationally efficient algorithm for bandit convex optimization with new state-of-the-art guarantees for both Lipschitz loss functions and loss functions with Lipschitz gradients. This is the first algorithm admitting both a polynomial time complexity and a reg...

Natasha: Faster Non-Convex Stochastic Optimization via Strongly Non-Convex Parameter

Given a nonconvex function f(x) that is an average of n smooth functions, we design stochastic first-order methods to find its approximate stationary points. The performance of our new methods depends on the smallest (negative) eigenvalue −σ of the Hessian. This parameter σ captures how strongly nonconvex f(x) is, and is analogous to the strong convexity parameter for convex optimization. At lea...
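The parameter σ mentioned above has a compact statement (the standard formulation in this line of work): f is σ-strongly nonconvex when every eigenvalue of its Hessian is at least −σ, mirroring how strong convexity bounds the eigenvalues from below by a positive μ:

% sigma-strong nonconvexity: Hessian eigenvalues bounded below by -sigma,
% compared with strong convexity, where they are bounded below by mu > 0.
\[
  \nabla^2 f(x) \succeq -\sigma I \quad \text{for all } x,
  \qquad \text{vs.} \qquad
  \nabla^2 f(x) \succeq \mu I, \ \mu > 0 \ \text{(strong convexity)}.
\]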

An Empirical Analysis of Bandit Convex Optimization Algorithms

We perform an empirical analysis of bandit convex optimization (BCO) algorithms. We motivate and introduce multi-armed bandits, and explore the scenario where the player faces an adversary that assigns different losses. In particular, we describe adversaries that assign linear losses as well as general convex losses. We then implement various BCO algorithms in the unconstrained setting and nume...
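As a concrete instance of the adversarial multi-armed bandit setting this abstract motivates, here is a minimal EXP3 learner run against a hand-crafted loss sequence. The losses, parameters, and printout are invented for illustration; this is not the paper's experimental setup.

import numpy as np

def exp3(losses, gamma=0.07, seed=1):
    """Minimal EXP3 for adversarial multi-armed bandits.

    losses: (T, K) array of per-arm losses in [0, 1]; only the pulled
    arm's loss is observed each round (bandit feedback).
    Returns the sequence of pulled arms.
    """
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    weights = np.ones(K)
    pulls = np.empty(T, dtype=int)
    for t in range(T):
        probs = (1 - gamma) * weights / weights.sum() + gamma / K
        arm = rng.choice(K, p=probs)
        pulls[t] = arm
        # Importance-weighted estimate of the unobserved loss vector.
        est = losses[t, arm] / probs[arm]
        weights[arm] *= np.exp(-gamma / K * est)
        weights /= weights.sum()  # rescale for numerical stability only
    return pulls

# An adversary that favors arm 0 early and arm 2 late (made up for
# illustration); EXP3 should shift most of its pulls accordingly.
T, K = 3000, 3
losses = np.ones((T, K))
losses[: T // 2, 0] = 0.0
losses[T // 2:, 2] = 0.0
pulls = exp3(losses)
print("pull frequencies per arm:", np.bincount(pulls, minlength=K) / T)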

Adaptive Algorithms and Data-Dependent Guarantees for Bandit Convex Optimization

We present adaptive algorithms with strong data-dependent regret guarantees for the problem of bandit convex optimization. In the process, we develop a general framework from which the main previous results in this setting can be recovered. The key method is the introduction of adaptive regularization. By appropriately adapting the exploration scheme, we show that one can derive regret guarantee...


Journal

Journal title: IEEE Transactions on Emerging Topics in Computational Intelligence

Year: 2023

ISSN: 2471-285X

DOI: https://doi.org/10.1109/tetci.2022.3171797